Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is used in a variety of applications. Recently, deep learning-based methods have achieved significant improvements in HAR alongside the development of human-computer interaction applications. However, standard convolutional neural networks operate only on local neighborhoods, so correlations between sensors placed at different body positions are ignored. In addition, these methods still suffer from performance degradation caused by large gaps between the training and test data distributions and by behavioral differences between subjects. In this work, we propose TASKED, a novel Transformer-based Adversarial learning framework for human activity recognition from wearable sensors via Self-KnowledgE Distillation, which accounts for individual sensor orientations as well as spatial and temporal features. The proposed method learns cross-domain embedding representations from datasets of multiple subjects by using adversarial learning and maximum mean discrepancy (MMD) regularization to align the data distributions across domains. We further adopt teacher-free self-knowledge distillation to improve the stability of training and the recognition performance. Experimental results show that TASKED not only outperforms state-of-the-art methods on four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
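The abstract above mentions maximum mean discrepancy (MMD) regularization for aligning feature distributions across subjects. Below is a minimal, hedged sketch of an RBF-kernel MMD penalty in PyTorch; the function name and bandwidth choice are illustrative assumptions, not TASKED's actual implementation.

```python
import torch

def mmd_rbf(x, y, bandwidth=1.0):
    """Biased RBF-kernel MMD^2 estimate between two feature batches.

    x: (n, d) features from one domain (e.g., one subject)
    y: (m, d) features from another domain (another subject)
    Illustrative regularizer only, not the paper's exact formulation.
    """
    def rbf(a, b):
        # Pairwise squared Euclidean distances, then a Gaussian kernel.
        d2 = torch.cdist(a, b, p=2.0) ** 2
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))

    k_xx = rbf(x, x).mean()
    k_yy = rbf(y, y).mean()
    k_xy = rbf(x, y).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Usage: add lambda * mmd_rbf(f_src, f_tgt) to the task loss so that
# embeddings of different subjects are pulled toward a shared distribution.
```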
Online personalized recommendation services are generally hosted in the cloud, where users query a cloud-based model to receive recommended content such as merchandise of interest or news feeds. State-of-the-art recommendation models rely on sparse and dense features to represent users' profile information and the items they interact with. Although sparse features account for 99% of the total model size, little attention has been paid to the potential information leakage through them. These sparse features encode users' behavior, e.g., their click history and object interactions, and thus potentially carry each user's private information. Sparse features are represented as learned embedding vectors stored in large tables, and personalized recommendation is performed by using a specific user's sparse features to index into these tables. Even with recently proposed methods that hide the computation happening in the cloud, an attacker in the cloud may still be able to track the access patterns to the embedding tables. This paper explores the private information that can be learned by tracking a recommendation model's sparse feature access patterns. We first characterize the types of attacks that can be carried out on sparse features in recommendation models in an untrusted cloud, and then demonstrate how each of these attacks can extract users' private information or track users by their behavior over time.
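As a concrete illustration of the access-pattern concern described above, the sketch below shows how a sparse feature (a list of item IDs) indexes an embedding table; the table size and feature names are hypothetical, and the point is only that the set of row indices touched per query mirrors the user's raw interaction history.

```python
import numpy as np

# Hypothetical embedding table: one row per item ID in the sparse feature vocabulary.
NUM_ITEMS, EMB_DIM = 100_000, 64
item_table = np.random.randn(NUM_ITEMS, EMB_DIM).astype(np.float32)

def embed_sparse_feature(clicked_item_ids):
    """Gather and pool embeddings for a user's clicked items.

    Even if the gathered values are encrypted or the pooling runs in a TEE,
    the row indices accessed here equal the user's click history, which is
    exactly what an access-pattern attacker in the cloud could observe.
    """
    rows = item_table[clicked_item_ids]   # table lookups = observable access pattern
    return rows.sum(axis=0)               # sum-pooling into one dense vector

user_clicks = [42, 91, 1337]              # illustrative click history
pooled = embed_sparse_feature(user_clicks)
print(pooled.shape)                       # (64,)
```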
Detecting out-of-distribution (OOD) inputs at inference time is crucial for deploying neural networks in the real world. Previous methods commonly rely on the output of the network derived from the highly activated feature map. In this study, we first reveal that the norm of a feature map obtained from a block other than the last block can be a better indicator for OOD detection. Motivated by this, we propose a simple framework consisting of FeatureNorm, the norm of a feature map, and NormRatio, the ratio of FeatureNorm on ID data to FeatureNorm on OOD data, which measures the OOD detection capability of each block. In particular, to select the block that yields the largest gap between the FeatureNorm of ID and OOD inputs, we create Jigsaw-puzzle images from ID training samples as pseudo-OOD data, compute the NormRatio of each block, and select the block with the largest value. Once the suitable block is selected, OOD detection with FeatureNorm outperforms other OOD detection methods, reducing FPR95 by up to 52.77% on the CIFAR10 benchmark and by up to 48.53% on the ImageNet benchmark. We demonstrate that our framework generalizes to various architectures and show the importance of block selection, which can also improve previous OOD detection methods.
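A minimal sketch of the FeatureNorm/NormRatio idea described above, assuming FeatureNorm is the channel-wise L2 norm of a block's output averaged over spatial positions; the exact norm and pooling used by the paper may differ.

```python
import torch

def feature_norm(feat):
    """feat: (B, C, H, W) activation of one block.
    Returns one scalar per sample: the channel-wise L2 norm averaged over
    spatial positions (an assumed, illustrative definition)."""
    return feat.norm(p=2, dim=1).mean(dim=(1, 2))   # (B,)

def norm_ratio(block_feats_id, block_feats_pseudo_ood):
    """Ratio of average FeatureNorm on ID vs. pseudo-OOD (e.g., jigsaw) inputs.
    The block with the largest ratio is chosen for OOD scoring."""
    return feature_norm(block_feats_id).mean() / feature_norm(block_feats_pseudo_ood).mean()

# Block selection (pseudo-usage): run ID images and their jigsaw-shuffled
# versions through the backbone, compute norm_ratio per block, pick the argmax,
# then threshold feature_norm of that block at test time to flag OOD inputs.
```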
The recent advent of play-to-earn (P2E) systems in massively multiplayer online role-playing games (MMORPGs) has made in-game goods interchangeable with real-world value more than ever before. Goods in P2E MMORPGs can be directly exchanged for cryptocurrencies such as Bitcoin, Ethereum, or Klaytn via blockchain networks. Unlike traditional in-game goods, once P2E goods have been written to the blockchain, they cannot be restored by the game operation teams, even in cases of chargeback fraud such as payment fraud, cancellation, or refund. To tackle this problem, we propose a novel chargeback fraud prediction method, PU GNN, which leverages graph attention networks with a PU loss to capture both the players' in-game behavior and their P2E token transaction patterns. With the adoption of a modified GraphSMOTE, the proposed model handles the imbalanced label distribution in chargeback fraud datasets. Experiments on two real-world P2E MMORPG datasets demonstrate that PU GNN achieves superior performance over previously suggested methods.
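The abstract refers to a PU (positive-unlabeled) loss; the sketch below uses the non-negative PU risk estimator of Kiryo et al. (2017) with a sigmoid loss, shown only as a plausible instantiation since the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

def nn_pu_loss(logits_pos, logits_unl, prior=0.05):
    """Non-negative PU risk estimator (Kiryo et al., 2017) with a logistic loss.

    logits_pos: logits for labeled positive (known fraud) nodes
    logits_unl: logits for unlabeled nodes
    prior: assumed class prior of positives among unlabeled data (illustrative)
    """
    loss_pos = F.softplus(-logits_pos).mean()        # positives scored as positive
    loss_pos_as_neg = F.softplus(logits_pos).mean()  # positives scored as negative
    loss_unl_as_neg = F.softplus(logits_unl).mean()  # unlabeled scored as negative

    neg_risk = loss_unl_as_neg - prior * loss_pos_as_neg
    # Clamp the negative-class risk at zero to avoid overfitting when the
    # empirical estimate goes negative.
    return prior * loss_pos + torch.clamp(neg_risk, min=0.0)
```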
Deep learning has achieved outstanding performance on face recognition benchmarks, but performance degrades considerably for low-resolution (LR) images. We propose an attention similarity knowledge distillation approach that transfers attention maps obtained from a high-resolution (HR) network, serving as the teacher, to an LR network in order to boost LR recognition performance. Inspired by humans, who can approximate an object's region based on prior knowledge obtained from HR images, we design a knowledge distillation loss using cosine similarity so that the student network's attention resembles the teacher network's attention. Experiments on various LR face-related benchmarks confirm that the proposed method generally improves recognition performance in LR settings, outperforming state-of-the-art results by simply transferring well-constructed attention maps. The code is publicly available at https://github.com/gist-ailab/teaching-where-to-look.
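A minimal sketch of a cosine-similarity attention-distillation loss in the spirit described above; the attention-map construction (channel-wise pooling of squared activations) and the assumption that teacher and student maps share a spatial size are illustrative, not necessarily the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """feat: (B, C, H, W). Collapse channels into a spatial attention map by
    summing squared activations (a common, assumed construction)."""
    return (feat ** 2).sum(dim=1).flatten(1)      # (B, H*W)

def attention_similarity_kd_loss(student_feat, teacher_feat):
    """Push the student's (LR) attention toward the teacher's (HR) attention
    by maximizing cosine similarity between the flattened maps.
    Assumes matching spatial sizes; resize one map otherwise."""
    a_s = F.normalize(attention_map(student_feat), dim=1)
    a_t = F.normalize(attention_map(teacher_feat), dim=1)
    cos = (a_s * a_t).sum(dim=1)                  # per-sample cosine similarity
    return (1.0 - cos).mean()                     # 0 when the maps are identical
```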
In this paper, we describe the RTZR team's top-scoring submission to the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22), Track 1 for speaker verification under the closed dataset condition. The top-performing system is a fusion of 7 models, comprising 3 different types of model architectures. We focus on training the models to learn periodic information, so all models are trained with 4-6 second segments of each utterance. In addition, we adopt a large-margin fine-tuning strategy, which has shown good performance in previous challenges, for some of our fusion models. During evaluation, we apply a scoring method with adaptive symmetric normalization (AS-Norm) and matrix score averaging (MSA). Finally, we fuse all trained models with logistic regression. The final submission achieves 0.165 DCF and 2.912% EER on the VoxSRC-22 test set.
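The submission above applies adaptive symmetric score normalization (AS-Norm). The sketch below shows one common AS-Norm variant using top-k cohort statistics; the cohort size and exact formulation are assumptions, since the abstract does not spell them out.

```python
import numpy as np

def as_norm(raw_score, enroll_cohort_scores, test_cohort_scores, top_k=300):
    """Adaptive symmetric normalization of a single verification score.

    raw_score: cosine score between the enrollment and test embeddings
    enroll_cohort_scores: scores of the enrollment embedding vs. a cohort set
    test_cohort_scores: scores of the test embedding vs. the same cohort set
    """
    def top_stats(scores):
        top = np.sort(scores)[-top_k:]   # adaptive: keep the closest cohort members
        return top.mean(), top.std()

    mu_e, sigma_e = top_stats(enroll_cohort_scores)
    mu_t, sigma_t = top_stats(test_cohort_scores)
    return 0.5 * ((raw_score - mu_e) / sigma_e + (raw_score - mu_t) / sigma_t)
```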
Split learning and inference propose running the training/inference of large models across client devices and the cloud. However, such model splitting raises privacy concerns, because the activations flowing through the split layer may leak information about the client's private input data. Currently, there is neither a good way to quantify how much private information leaks through the split layer, nor a good way to raise privacy to a desired level. In this work, we propose using Fisher information as a privacy metric to measure and control information leakage. We show that Fisher information can be intuitively understood as a measure of how much private information leaks through the split layer, in the form of an error bound on an unbiased reconstruction attacker. We then propose a privacy-enhancing technique, REFIL, that can enforce a user-specified Fisher information leakage at the split layer, achieving high privacy while maintaining reasonable utility.
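The Fisher-information interpretation above is essentially a Cramér-Rao-style bound; the formulation below is a standard statement of that bound for an unbiased reconstruction attacker and may differ in details from the paper's exact definition of Fisher information leakage.

```latex
% For split-layer activations z produced from a private input x, let
% \mathcal{I}(x) denote the Fisher information matrix of z about x. For any
% unbiased reconstruction attack \hat{x}(z), the Cram\'er--Rao bound gives
\mathbb{E}\!\left[\lVert \hat{x}(z) - x \rVert_2^2\right]
  \;\ge\; \operatorname{tr}\!\left(\mathcal{I}(x)^{-1}\right),
% so enforcing smaller Fisher information at the split layer (e.g., by adding
% noise) raises the lower bound on the attacker's reconstruction error.
```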
Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners. To this end, FL requires the data owners to train locally and share gradient updates (instead of private inputs) with a central server, where they are securely aggregated over the multiple data owners. Although aggregation by itself does not provably provide privacy protection, prior work has suggested that it suffices if the batch size is sufficiently large. In this paper, we propose the Cocktail Party Attack (CPA), which, contrary to prior belief, is able to recover private inputs from aggregated gradients even at very large batch sizes. CPA exploits the crucial insight that the aggregate gradient of a fully connected layer is a linear combination of its inputs, which allows us to frame gradient inversion as a blind source separation (BSS) problem (informally known as the cocktail party problem). We adapt independent component analysis (ICA), a classical solution to the BSS problem, to recover private inputs for fully connected and convolutional networks, and show that CPA significantly outperforms prior gradient inversion attacks, scales to ImageNet-sized inputs, and works on large batch sizes of up to 1024.
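To make the BSS framing concrete, the sketch below applies FastICA to the aggregated weight gradient of a fully connected layer; the shapes and the direct use of sklearn's FastICA are illustrative assumptions and omit the attack's refinements described in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy setup: a batch of B flattened inputs passing through a fully connected layer.
B, D, H = 64, 3072, 512                 # batch size, input dim, hidden units
rng = np.random.default_rng(0)
X = rng.standard_normal((B, D))         # private inputs (unknown to the attacker)
G = rng.standard_normal((B, H))         # per-sample gradients w.r.t. layer outputs

# Key observation: dL/dW of a fully connected layer is G^T @ X, so each of its
# H rows is a linear combination ("mixture") of the B private inputs.
grad_W = G.T @ X                        # (H, D), the quantity an attacker observes

# Treat the rows as mixed signals and unmix them with ICA to estimate the inputs
# up to permutation and scaling. Real recovery relies on the non-Gaussianity of
# image inputs; the Gaussian toy data here only demonstrates the pipeline shapes.
ica = FastICA(n_components=B, random_state=0)
X_hat = ica.fit_transform(grad_W.T).T   # (B, D) recovered source estimates
print(X_hat.shape)
```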
In Skinned Multi-Person Linear model (SMPL)-based human mesh reconstruction from multi-view images, the accuracy of joint rotation and shape estimation has received relatively little attention compared to that of joint locations. Work in this field falls broadly into two categories. The first approach performs joint estimation and then produces SMPL parameters by fitting SMPL to the resulting joints. The second approach regresses SMPL parameters directly from the input images through a convolutional neural network (CNN)-based model. However, these approaches lack the information needed to resolve the ambiguity of joint rotations and shape reconstruction, and suffer from the difficulty of network learning. To address these issues, we propose a two-stage method. The proposed method first estimates the coordinates of the mesh vertices with a CNN-based model from the input images, and then obtains the SMPL parameters by fitting the SMPL model to the estimated vertices. The estimated mesh vertices provide sufficient information for determining joint rotations and shape, and are easier to learn than SMPL parameters. According to experiments on the Human3.6M and MPI-INF-3DHP datasets, the proposed method significantly outperforms previous works in joint rotation and shape estimation, and achieves competitive performance in joint location estimation.
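A hedged sketch of the second stage described above: fitting SMPL parameters to estimated mesh vertices by gradient descent. It assumes a differentiable SMPL layer such as the one provided by the `smplx` package (with SMPL model files available locally); the loss and optimizer settings are illustrative, not the paper's.

```python
import torch
import smplx   # assumed dependency: pip install smplx, plus SMPL model files on disk

def fit_smpl_to_vertices(target_vertices, model_path, steps=200, lr=0.05):
    """target_vertices: (1, 6890, 3) mesh vertices predicted by the CNN stage.
    Optimizes SMPL pose/shape so the model's vertices match the prediction."""
    smpl = smplx.create(model_path, model_type="smpl")
    pose = torch.zeros(1, 69, requires_grad=True)    # body pose (axis-angle)
    orient = torch.zeros(1, 3, requires_grad=True)   # global orientation
    betas = torch.zeros(1, 10, requires_grad=True)   # shape coefficients

    opt = torch.optim.Adam([pose, orient, betas], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = smpl(body_pose=pose, global_orient=orient, betas=betas)
        loss = ((out.vertices - target_vertices) ** 2).mean()  # per-vertex L2 fit
        loss.backward()
        opt.step()
    return pose.detach(), orient.detach(), betas.detach()
```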
Finding correspondences across images is an important task in many vision applications. Recent state-of-the-art methods focus on end-to-end learning-based architectures designed in a coarse-to-fine manner. They use very deep CNNs or multi-block Transformers to learn robust representations, which requires high computational power. Moreover, these methods learn features without understanding the objects and shapes inside the images, and thus lack interpretability. In this paper, we propose an architecture for image matching that is efficient, robust, and interpretable. More specifically, we introduce a novel feature matching module called TopicFM, which coarsely organizes the spatial structures across images into topics and then augments the features inside each topic for accurate matching. To infer the topics, we first learn global embeddings of the topics and then use a latent variable model to detect and assign image structures to topics. Our method can perform matching only in the co-visible regions, reducing computation. Extensive experiments on outdoor and indoor datasets show that our method outperforms state-of-the-art methods in terms of both matching performance and computational efficiency. The code is available at https://github.com/truongkhang/topicfm.
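A minimal, hedged sketch of the topic-assignment idea described above: each local feature is softly assigned to learned topic embeddings, and matching is then restricted to features sharing the same dominant topic. The assignment rule and shapes are illustrative assumptions, not TopicFM's actual latent variable model.

```python
import torch
import torch.nn.functional as F

def assign_topics(features, topic_embeddings, temperature=0.1):
    """features: (N, D) local descriptors of one image.
    topic_embeddings: (K, D) learned global topic vectors.
    Returns soft assignment probabilities of shape (N, K)."""
    logits = features @ topic_embeddings.t() / temperature
    return logits.softmax(dim=-1)

def match_within_topics(feat_a, feat_b, topic_embeddings):
    """Match only feature pairs whose dominant topics agree (a cheap proxy for
    restricting matching to co-visible structures)."""
    top_a = assign_topics(feat_a, topic_embeddings).argmax(dim=-1)  # (Na,)
    top_b = assign_topics(feat_b, topic_embeddings).argmax(dim=-1)  # (Nb,)
    sim = F.normalize(feat_a, dim=-1) @ F.normalize(feat_b, dim=-1).t()
    same_topic = top_a[:, None] == top_b[None, :]
    sim = sim.masked_fill(~same_topic, float("-inf"))   # prune cross-topic pairs
    return sim.argmax(dim=-1)                            # nearest match in B per A feature
```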